
    The Impact of Dementia and Extremity Injuries on the Plasticity of Long-term Care Demand: An Analysis of Counterfactual Projection Scenarios Based on German Health Insurance Routine Data

    Get PDF
    Although demand for long-term care (LTC) in Germany is expected to increase over the coming decades, the LTC sector will struggle to provide sufficient capacity. Evaluating the impact of different risk factors on future LTC demand is necessary in order to make informed policy decisions. Dementia and lower extremity injuries (LEI) are common risk factors for LTC need; we use both to demonstrate the maximum attainable efficacy of removing a risk factor in mitigating the future increase in overall LTC need, both at home and in nursing homes.

    We use a multi-state projection model for which the estimation of the underlying transition and mortality rates is based on longitudinal health claims data from AOK, Germany’s largest public health insurance provider, covering the years 2004 to 2010. We project six different scenarios of LTC for ages 75+ in Germany for the period from 2014 to 2044, including counterfactual scenarios that remove the effects of LEI, dementia, or both. Our multi-state projections distinguish between home-based and institutional LTC.

    Removing the effect of LTC risk factors mitigates the increase in total LTC demand and postpones demand until a later age. Removing dementia markedly shifts future care demand from institutional LTC to LTC at home, and even increases demand for LTC at home at older ages beyond the baseline projection, owing to the dual function of dementia as a risk factor for both LTC demand and mortality. Removing LEI has less of an effect on overall and sectoral LTC demand. Removing both risk factors at the same time has the greatest impact, which is even more marked than that of the two individual scenarios combined, indicating a synergistic effect of dementia and LEI on LTC risk.

    The type of LTC demand (home-based or institutional) shows considerable plasticity when specific risk factors are removed. We demonstrate the degree to which LTC demand can be shifted in favour of LTC at home, using dementia and LEI as examples of potentially modifiable risk factors, and thus show how the efficacy of potential intervention targets for policy-makers can be assessed.

    This study provides evidence on the degree of plasticity of future long-term care demand at home and in institutions that would hypothetically be attainable by completely removing specific cognitive or physical risk factors of care need (dementia or LEI). It is based on large-scale health claims data, which contain longitudinal individual-level data on morbidity and long-term care status. A close link exists between the cognitive risk factor of dementia and the type of LTC, as its absence shifts care demand to home-based care at older ages. The study also demonstrates the usefulness of counterfactual projections based on health claims data in assessing the hypothetical maximum efficacy of different intervention strategies. Online Data Appendix: https://doi.org/10.12765/CPoS-2019-20e
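
    To make the projection mechanics concrete, here is a minimal sketch (state definitions and all transition probabilities are made up for illustration; they are not the paper's estimates): a discrete-time multi-state model in which a counterfactual scenario is produced by editing the transition matrix, e.g. reducing the care-entry rates that a risk factor drives.

```python
# Hedged sketch of multi-state projection with a counterfactual scenario.
# States: healthy (H), home care (HC), institutional care (IC), dead (D).
import numpy as np

STATES = ["H", "HC", "IC", "D"]

# Annual transition probabilities, rows = from-state (made up for the demo)
P_base = np.array([
    [0.90, 0.05, 0.02, 0.03],   # healthy
    [0.02, 0.80, 0.10, 0.08],   # home care
    [0.00, 0.02, 0.83, 0.15],   # institutional care
    [0.00, 0.00, 0.00, 1.00],   # dead (absorbing)
])

# Counterfactual: remove a risk factor -> fewer entries into care
P_cf = P_base.copy()
P_cf[0, 1:3] *= 0.6                      # cut the H -> HC and H -> IC rates
P_cf[0, 0] = 1.0 - P_cf[0, 1:].sum()     # each row must still sum to one

def project(P, pop0, years=30):
    """Propagate a population vector through repeated annual transitions."""
    pop = pop0.copy()
    for _ in range(years):
        pop = pop @ P
    return pop

pop0 = np.array([1e6, 0.0, 0.0, 0.0])    # cohort entering at age 75
for name, P in [("baseline", P_base), ("counterfactual", P_cf)]:
    print(name, dict(zip(STATES, np.round(project(P, pop0)))))
```

    The paper's model estimates the transition and mortality rates from the AOK claims data; the sketch only shows how editing a risk factor's effect in the transition matrix reshapes the projected distribution across home-based and institutional care.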

    Indirect adaptive higher-order sliding-mode control using the certainty-equivalence principle

    Get PDF
    Since the late 1950s, huge efforts have been made to develop control algorithms that are capable of compensating for uncertainties and disturbances. Adaptive controllers that adjust their parameters continuously have been used from the beginning to solve this task; this continuous adaptation allows the controller to maintain its performance even under changing conditions. A different idea is pursued by variable structure systems, in particular by so-called sliding-mode control, which employs a very fast switching signal to compensate for disturbances and uncertainties. This thesis deals with a combination of these two rather different approaches while preserving the advantages of each method. The design of a sliding-mode controller normally does not demand sophisticated knowledge about the disturbance, but its robustness against state-dependent uncertainties may be poor. Adaptive controllers, on the other hand, are well suited to compensating for parametric uncertainties, while unstructured disturbances may degrade their performance. Hence, the objective of this work is to design sliding-mode controllers that use as much information about the uncertainty as possible and exploit this knowledge in the design.

    An important point is that the design procedure is based on a rigorous proof of the stability of the combined approach, which only recent results on Lyapunov theory in the field of sliding mode have made possible. It is shown that the Lyapunov function of the nominal sliding-mode controller has a direct impact on the adaptation law; this Lyapunov function therefore has to meet certain conditions in order to allow a proper implementation of the proposed algorithms. The main contributions of this thesis are sliding-mode controllers extended by an adaptive part using the certainty-equivalence principle. It is shown that the combination of both approaches results in a novel controller design that is able to solve a control task even in the presence of different classes of uncertainties. In addition to the theoretical analysis, the advantages of the proposed method are demonstrated in a selection of simulation examples and on a laboratory test bench. The experiments show that the proposed control algorithm delivers better performance with regard to chattering and robustness compared to classical sliding-mode controllers.
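
    The certainty-equivalence idea lends itself to a compact illustration. The following is a minimal sketch (our own toy example, not one of the thesis' higher-order algorithms): a first-order sliding-mode controller for a scalar plant x' = a*x + u + d(t) with unknown parameter a, where the current estimate a_hat is plugged into the control law and adapted online, while the switching term absorbs the residual bounded disturbance.

```python
# Hedged sketch: first-order adaptive sliding-mode control via
# certainty equivalence on  x' = a*x + u + d(t),  a unknown, d bounded.
import numpy as np

dt, T = 1e-4, 5.0
a_true = 2.0          # unknown plant parameter (assumed for the demo)
k, gamma = 1.5, 5.0   # switching gain and adaptation gain (tuning choices)

x, a_hat = 1.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    d = 0.5 * np.sin(3.0 * t)          # bounded matched disturbance
    s = x                              # sliding variable (here simply x)
    u = -a_hat * x - k * np.sign(s)    # certainty equivalence + switching term
    a_hat += gamma * s * x * dt        # gradient-type adaptation law
    x += (a_true * x + u + d) * dt     # Euler step of the plant

print(f"final |x| = {abs(x):.2e}, a_hat = {a_hat:.3f} (true a = {a_true})")
```

    With this gradient-type adaptation law, the Lyapunov function V = s^2/2 + (a - a_hat)^2/(2*gamma) decreases whenever k exceeds the disturbance bound, which mirrors, in the simplest possible setting, the role the nominal Lyapunov function plays in the adaptation law described above.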

    Serial Correlations in Single-Subject fMRI with Sub-Second TR

    Full text link
    When performing statistical analysis of single-subject fMRI data, serial correlations need to be taken into account to allow for valid inference. Otherwise, the variability in the parameter estimates might be underestimated, resulting in increased false-positive rates. Serial correlations in fMRI data are commonly characterized in terms of a first-order autoregressive (AR) process and then removed via pre-whitening. The required noise model for the pre-whitening depends on a number of parameters, particularly the repetition time (TR). Here we investigate how the sub-second temporal resolution provided by simultaneous multislice (SMS) imaging changes the noise structure in fMRI time series. We fit a higher-order AR model and then estimate the optimal AR model order for a sequence with a TR of less than 600 ms that provides whole-brain coverage. We show that physiological noise modelling successfully reduces the required AR model order, but remaining serial correlations necessitate an advanced noise model. We conclude that commonly used noise models, such as the AR(1) model, are inadequate for modelling serial correlations in fMRI using sub-second TRs. Rather, physiological noise modelling in combination with advanced pre-whitening schemes enables valid inference in single-subject analysis using fast fMRI sequences.
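
    As a minimal illustration of AR model-order estimation (an assumed sketch on synthetic data, not the paper's actual pipeline), one can fit AR(p) models of increasing order to a residual time series and select the order that minimises an information criterion such as the AIC:

```python
# Hedged sketch: least-squares AR(p) fits with AIC-based order selection.
import numpy as np

def fit_ar_aic(y, p):
    """Least-squares AR(p) fit; returns (coefficients, AIC)."""
    y = y - y.mean()
    # Design matrix of lagged values: row t holds y[t-1], ..., y[t-p]
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    n = len(target)
    sigma2 = resid @ resid / n
    aic = n * np.log(sigma2) + 2 * p      # up to an additive constant
    return coef, aic

rng = np.random.default_rng(0)
# Synthetic series standing in for a voxel's residual time course
y = rng.standard_normal(500)
for t in range(2, len(y)):                # inject AR(2)-like serial correlation
    y[t] += 0.5 * y[t - 1] - 0.2 * y[t - 2]

best = min(range(1, 11), key=lambda p: fit_ar_aic(y, p)[1])
print("selected AR order:", best)
```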

    Vehicle Tracking and Motion Estimation Based on Stereo Vision Sequences

    Get PDF
    In this dissertation, a novel approach for estimating the trajectories of road vehicles such as cars, vans, or motorbikes from stereo image sequences is presented. Moving objects are detected and reliably tracked in real time from within a moving car. The resulting information on the pose and motion state of other moving objects with respect to the ego-vehicle is an essential basis for future driver assistance and safety systems, e.g. for collision prediction. The focus of this contribution is on oncoming traffic, while most existing work in the literature addresses tracking the lead vehicle. The overall approach is generic and scalable to a variety of traffic scenes, including inner-city, country road, and highway scenarios. A considerable part of this thesis addresses oncoming traffic at urban intersections. The parameters to be estimated include the 3D position and orientation of an object relative to the ego-vehicle, as well as the object's shape, dimensions, velocity, acceleration, and rotational velocity (yaw rate). The key idea is to derive these parameters from a set of tracked 3D points on the object's surface, which are registered to a time-consistent object coordinate system, by means of an extended Kalman filter. Combining the rigid 3D point cloud model with the dynamic model of a vehicle is one main contribution of this thesis. Vehicle tracking at intersections requires covering a wide range of different object dynamics, since vehicles can turn quickly. Three different approaches for tracking objects during highly dynamic turn maneuvers, up to extreme maneuvers such as skidding, are presented and compared. These approaches allow for an online adaptation of the filter parameter values, overcoming manual parameter tuning depending on the dynamics of the tracked object in the scene; this is the second main contribution. Further contributions include two initialization methods, a robust outlier handling, a probabilistic approach for assigning new points to a tracked object, as well as a mid-level fusion of the vision-based approach with a radar sensor. The overall system is systematically evaluated both on simulated and real-world data. The experimental results show that the proposed system is able to accurately estimate the object pose and motion parameters in a variety of challenging situations, including night scenes, quick turn maneuvers, and partial occlusions. The limits of the system are also carefully investigated.
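
    A minimal sketch of the estimator class used here (a simplified stand-in with assumed numbers, not the thesis' actual filter, which registers many 3D surface points): an extended Kalman filter with a constant-speed, constant-yaw-rate vehicle motion model, updated with noisy 2D position measurements of a single tracked point.

```python
# Hedged sketch: EKF with a constant-speed, constant-yaw-rate motion model.
import numpy as np

dt = 0.1
# State: [x, y, heading, speed, yaw_rate]
s = np.array([0.0, 0.0, 0.0, 10.0, 0.3])
P = np.eye(5)
Q = np.diag([0.01, 0.01, 1e-3, 0.5, 1e-3])   # process noise (tuning choice)
R = np.diag([0.25, 0.25])                    # measurement noise
H = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0]], dtype=float) # we observe position only

def f(s):
    x, y, psi, v, w = s
    return np.array([x + v * np.cos(psi) * dt,
                     y + v * np.sin(psi) * dt,
                     psi + w * dt, v, w])

def F_jac(s):
    _, _, psi, v, _ = s
    F = np.eye(5)
    F[0, 2] = -v * np.sin(psi) * dt
    F[0, 3] = np.cos(psi) * dt
    F[1, 2] = v * np.cos(psi) * dt
    F[1, 3] = np.sin(psi) * dt
    F[2, 4] = dt
    return F

rng = np.random.default_rng(1)
truth = s.copy()
for _ in range(50):
    truth = f(truth)                          # simulated true motion
    z = truth[:2] + rng.normal(0, 0.5, 2)     # noisy point observation
    F = F_jac(s)                              # EKF predict
    s, P = f(s), F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # EKF update
    s = s + K @ (z - H @ s)
    P = (np.eye(5) - K @ H) @ P

print("estimated [x, y, heading, speed, yaw_rate]:", np.round(s, 2))
```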

    Correcting Biases in a lower resolution global circulation model with data assimilation

    Full text link
    With this work, we aim at developing a new method of bias correction using data assimilation, based on the stochastic forcing of a model. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is added directly inside the model's equations. We create an ensemble of runs and treat the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested in a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea-ice ocean model NEMO LIM, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea-ice model) with long time steps, allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher-resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function and take its derivatives to get zonal and meridional velocity fields. We also constrain the stream function along the coasts in order not to have currents perpendicular to the coast. The randomly generated stochastic forcings are then injected directly into the NEMO LIM model's equations in order to force the model at each time step, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations of the sea surface height available from the mean dynamic topography of CNES (Centre national d’études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
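
    The stream-function construction can be illustrated compactly. In the sketch below (a stand-in: a Gaussian-smoothed random field replaces the Diva output), a smooth random stream function psi yields non-divergent velocity perturbations via u = -dpsi/dy, v = dpsi/dx, and holding psi constant along the boundary prevents flow across the coast.

```python
# Hedged sketch: non-divergent velocity perturbations from a random
# stream function, so the injected forcing cannot create spurious
# sources or sinks of volume.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
ny, nx, dx = 100, 120, 1.0

psi = gaussian_filter(rng.standard_normal((ny, nx)), sigma=8.0)  # smooth field
psi[:, 0] = psi[:, -1] = psi[0, :] = psi[-1, :] = 0.0  # psi constant on the
                                                       # "coast": no cross-flow
dpsi_dy, dpsi_dx = np.gradient(psi, dx)
u, v = -dpsi_dy, dpsi_dx       # zonal and meridional velocity perturbations

# Non-divergence check: du/dx + dv/dy should vanish up to discretisation
div = np.gradient(u, dx, axis=1) + np.gradient(v, dx, axis=0)
print("max |divergence|:", float(np.abs(div).max()))
```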

    Composition Conditions for Classes of Analytic Functions

    Full text link
    We prove that for classes of analytic functions the tree composition condition and the composition condition coincide. Comment: 13 pages

    Fair tax evasion

    Get PDF
    In this paper, we analyse how fairness considerations, in particular considerations of just income distribution, affect whether or not people believe tax evasion can be justified, and their willingness to engage in tax evasion. Using data from the Norwegian “Hidden Labour Market Survey”, we show that individuals with low wages or long working hours, groups that most tax systems treat unfairly, have a higher probability of justifying tax evasion. The same individuals are also more willing to take home income without reporting it to the tax authorities. These results are consistent with a model in which individuals make a trade-off between economic gains and fairness considerations when deciding whether to evade taxes. Taken together, our results suggest that considerations of fair income distribution are important for the analysis of tax evasion.
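
    One minimal way to formalise such a trade-off (our own illustrative sketch in the spirit of Allingham and Sandmo, not the model estimated in the paper) is to let a moral cost of evading depend on perceived unfairness:

\[
EU(e) = (1-p)\,u(y+e) + p\,u(y-fe) - \kappa(F)\,e ,
\]

    where e is unreported income, p the detection probability, f the fine rate, and \kappa(F) a moral cost that decreases in the perceived unfairness F of the tax system. Individuals who feel unfairly treated face a lower \kappa(F) and therefore justify and choose more evasion, consistent with the pattern found for low-wage and long-hours workers.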

    Comparing fragmentation of strange quark in Z decays and K+p reactions

    Get PDF
    The ratios of the production rates K*(892)/K, phi/K, rho^0/pi, omega/pi, Delta^{++}/p, Sigma+(1385)/Lambda and Xi^-/Lambda, and their x_p dependences, obtained from results of the LEP and SLD experiments in hadronic Z decays are analysed. The corresponding ratios for promptly produced mesons are estimated at x_p -> 1. A comparison of the LEP results with those from the Mirabelle and BEBC K+p experiments at 32 and 70 GeV/c shows a striking similarity in the fragmentation of the strange valence quark of the incident K+ and of strange quarks produced in Z decays. The JETSET model describes the LEP, Mirabelle and BEBC results. The model of Pei is consistent with the data for mesons, but presumably underestimates the fractions of primary octet baryons. The quark combinatorics model of Anisovich et al. is incompatible with the data. Comment: tex and sty files and 6 eps-figures